2 - 31. What did we learn in AI 1/2? (Part 2) [ID:30402]

The upshot here is that once we have a concept like utility, we can define something that we call rationality, which is really optimizing the expected utility. That is a very powerful thing, and it is something we have done all semester.

If you optimize expected utility, that is what we call rationality, and rationality is very hard to distinguish from intelligence. Actually, it's indistinguishable from intelligence: if you think about it, intelligent behavior always optimizes something, and if somebody is blatantly under-optimizing in a way we don't understand, we usually say, oh, that's dumb. So the model we had, and still have, is rationality as our operational model for intelligence. Rationality means we're optimizing the expected utility.
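In symbols, using the standard decision-theoretic formulation (the lecture states this in words rather than writing it out), a rational agent picks the action with the highest expected utility:

```latex
a^{*} \;=\; \arg\max_{a \in A} \mathrm{EU}(a)
      \;=\; \arg\max_{a \in A} \sum_{s} P(s \mid a)\, U(s)
```

Here A is the set of available actions, P(s | a) is the probability that action a leads to outcome s, and U(s) is the agent's utility for that outcome.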

Since utilities might change, we have to do learning. And since we only have partially observable and non-deterministic environments, rationality does not entail actual optimality. If you are in a deterministic, fully observable world, the only way to be rational is to be optimal. But in a non-deterministic, partially observable world, there is no way of being optimal, so we settle for rationality instead, which factors in the uncertainty. In particular, that does not mean that to be intelligent you have to be omniscient. You don't have to know everything; you only have to know, or be able to acquire, enough knowledge to actually optimize. This is where questions we've talked about, like the value of information, come into play: should I actually learn something first? There's a bus coming at me. Should I write a doctoral thesis about whether it's better to jump left or jump right? The answer is clearly no, because that takes longer than you have left to live.
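The "should I learn this first?" question is made precise by the value of (perfect) information. In the standard formulation (again, stated here in words rather than as a formula), the value of observing a variable E before acting is

```latex
\mathrm{VPI}(E) \;=\; \Bigl(\sum_{e} P(E = e)\, \max_{a}\, \mathrm{EU}(a \mid E = e)\Bigr) \;-\; \max_{a}\, \mathrm{EU}(a)
```

Gathering the information is only worthwhile if VPI(E) exceeds its cost, including the time it takes, which is exactly why the doctoral thesis in front of the bus is not rational.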

So you do not have to be omniscient, and you do not have to be clairvoyant either, which means you don't have to be able to observe things that you cannot really observe: we are only optimizing within the bounds that our restricted knowledge and restricted observation capabilities give us. What you do have to do is explore and acquire knowledge, because that is actually part of our framework: to be rational, we have to explore, but only under certain constraints.
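One standard way this exploration-under-constraints shows up, for example in reinforcement learning, is an epsilon-greedy policy. This is a minimal sketch, not something from this clip; the action names and the 10% exploration rate are assumptions.

```python
import random

def epsilon_greedy(estimated_utilities, epsilon=0.1):
    """Mostly exploit the action with the best current utility estimate,
    but with probability epsilon explore a random action instead."""
    actions = list(estimated_utilities)
    if random.random() < epsilon:
        return random.choice(actions)                 # explore: acquire knowledge
    return max(actions, key=estimated_utilities.get)  # exploit: act on what we know

# Hypothetical utility estimates the agent has learned so far.
estimates = {"left": 0.4, "right": 0.7, "wait": 0.1}
print(epsilon_greedy(estimates))
```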

At the same time, being rational does not necessarily mean that we are successful. There are all these problems with being rational, and one of them is the prisoner's dilemma: Bonnie and Clyde. They've been caught, and each of them is separately offered a deal: with what we already know, we can put you in prison for one year; but if you tell on your partner, we'll give you half a year and your partner ten years. Now think about your options. Say you're Bonnie. There are two cases: either Clyde tells on me or he doesn't. And I have two options: telling on Clyde or not telling. In both cases it's better for me if I tell on Clyde. (Yes, if that were not so, the problem would go away; I want to set the problem up so that it is rational to tell on your partner, even though from a global perspective that is clearly the wrong thing.) So if both follow this line of reasoning, both tell and both get ten years, which is the worst thing that could happen to them; the best thing would be for both to get one year. But if you only look at it from your own rational perspective, without any prediction about what your partner does, then of course it's rational to tell on your partner, which directly means that two rational people end up in the worst situation imaginable.
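Here is a minimal sketch of Bonnie's case analysis. The one-year, half-year, and ten-year figures are the ones from the lecture; the deal as quoted does not fully pin down the sentence when both tell, so the nine years below is an assumed value, chosen only so that telling is strictly better for Bonnie in both cases.

```python
# Years in prison for (Bonnie, Clyde), indexed by their choices.
PAYOFFS = {
    ("silent", "silent"): (1.0, 1.0),    # neither tells: one year each
    ("tell",   "silent"): (0.5, 10.0),   # only Bonnie tells: half a year for her
    ("silent", "tell"):   (10.0, 0.5),   # only Clyde tells: ten years for Bonnie
    ("tell",   "tell"):   (9.0, 9.0),    # both tell (assumed value, see above)
}

def bonnie_years(bonnie: str, clyde: str) -> float:
    """Bonnie's sentence, given both choices."""
    return PAYOFFS[(bonnie, clyde)][0]

# Whatever Clyde does, telling gives Bonnie fewer years in prison.
for clyde in ("silent", "tell"):
    print(f"Clyde chooses {clyde}: Bonnie tells -> {bonnie_years('tell', clyde)} years, "
          f"stays silent -> {bonnie_years('silent', clyde)} years")

# Clyde reasons the same way, so both tell: the worst joint outcome (18 years in
# total), even though staying silent would have cost them only one year each.
```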

So, again, rational doesn't mean that you're actually successful. Of course, we know of lots of highly intelligent people with whom I would not want to switch places, because they are unsuccessful.

That's really the framework under which we were operating. All of those green things here, which are the caveats of doing the right thing, are ...

Part of a chapter:
Chapter 31. What did we learn in AI 1/2?

Accessible via: Open access

Duration: 00:19:14 min

Recording date: 2021-03-30

Uploaded on: 2021-03-31 08:37:46

Language: en-US

A recap of a rational agent and a short overview of the topics of AI-2. Finally, topics for AI-3 are mentioned (which is not taught at FAU). 
